AAAI AI-Alert Ethics for Dec 29, 2020
Google widely criticized after parting ways with a leading voice in AI ethics
Many Google employees and others in the tech and academic communities are furious over the sudden exit from Google of a pioneer in the study of ethics in artificial intelligence, a departure they see as a failure by an industry titan to foster an environment supportive of diversity. Timnit Gebru is known for her research into bias and inequality in AI, and in particular for a 2018 paper she coauthored with Joy Buolamwini that highlighted how poorly commercial facial-recognition software fared when attempting to classify women and people of color. Their work sparked widespread awareness of issues common in AI today, particularly when the technology is tasked with identifying anything about human beings. At Google, Gebru was the co-leader of the company's ethical AI team and one of very few Black employees at the company overall (3.7% of Google's employees are Black, according to the company's 2020 annual diversity report), let alone in its AI division. The research scientist is also a cofounder of the group Black in AI. On Wednesday night, Gebru tweeted that she had been "immediately fired" over an email she had recently sent to Google's Brain Women and Allies internal mailing list.
- Information Technology > Services (0.77)
- Law (0.72)
The New Laws of Robotics and what they might mean for AI
Way back in 1942, science fiction writer Isaac Asimov created the Three Laws of Robotics, which he wrote into a short story called "Runaround". Their influence on technological development has been significant and long-lasting. Now, legal academic and AI expert Frank Pasquale has expanded that list. Building on Asimov's legacy, Professor Pasquale's four new laws of robotics are designed to ensure that the future development of artificial intelligence serves the interests of humanity.
- Law (1.00)
- Education > Educational Setting > Higher Education (0.37)
- Education > Curriculum > Subject-Specific Education (0.37)
Establish AI Governance, Not Best Intentions, to Keep Companies Honest - InformationWeek
IBM, Microsoft, and Amazon all recently announced they are either halting or pausing facial recognition technology initiatives. IBM even launched the Notre Dame-IBM Tech Ethics Lab, "a 'convening sandbox' for affiliated scholars and industry leaders to explore and evaluate ethical frameworks and ideas." In my view, the governance that will yield ethical artificial intelligence (AI), specifically unbiased decision-making based on AI, won't spring from an academic sandbox. AI governance is a board-level issue. Boards of directors should care about AI governance because AI technology makes decisions that profoundly affect everyone.
- Information Technology (0.94)
- Law > Statutes (0.31)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Data Science > Data Mining > Big Data (0.40)